12 research outputs found

    A Parallel Hardware Architecture For Quantum Annealing Algorithm Acceleration

    Get PDF
    Quantum Annealing (QA) is an emerging technique, derived from Simulated Annealing, that provides metaheuristics for multivariable optimisation problems. Studies have shown that it can be applied to solve NP-hard problems with faster convergence and better quality of result than other traditional heuristics, with potential applications in a variety of fields, from transport logistics to circuit synthesis and optimisation. In this paper, we present a hardware architecture implementing a QA-based solver for the Multidimensional Knapsack Problem, designed to improve the performance of the algorithm by exploiting parallelised computation. We synthesised the architecture targeting an Altera FPGA board and simulated its execution on a set of benchmarks available in the literature. Simulation results show that the proposed implementation is about 100 times faster than a single-threaded general-purpose CPU, without impact on the accuracy of the solution.
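
    As a plain-software illustration of the kind of annealing loop such a hardware solver accelerates, the sketch below implements a classical simulated-annealing search for the Multidimensional Knapsack Problem. The neighbourhood move, cooling schedule and all names are assumptions for illustration, not the paper's parallel FPGA architecture.

```python
# Minimal sketch of an annealing-style solver for the Multidimensional
# Knapsack Problem (MKP). Classical simulated annealing is used here as a
# simplified stand-in for the QA metaheuristic; all parameters are assumed.
import math
import random

def anneal_mkp(values, weights, capacities, steps=20000, t0=1.0, alpha=0.9995):
    """values: item profits; weights[i][d]: weight of item i in dimension d;
    capacities[d]: capacity of dimension d."""
    n, dims = len(values), len(capacities)
    x = [0] * n                       # current selection (one bit per item)
    best, best_val, cur_val = x[:], 0, 0
    temp = t0

    def feasible(sol):
        return all(sum(weights[i][d] * sol[i] for i in range(n)) <= capacities[d]
                   for d in range(dims))

    def value(sol):
        return sum(values[i] * sol[i] for i in range(n))

    for _ in range(steps):
        i = random.randrange(n)       # neighbour: flip one item
        x[i] ^= 1
        if not feasible(x):
            x[i] ^= 1                 # reject infeasible moves
        else:
            new_val = value(x)
            delta = new_val - cur_val
            if delta >= 0 or random.random() < math.exp(delta / temp):
                cur_val = new_val     # accept (always if better, sometimes if worse)
                if cur_val > best_val:
                    best, best_val = x[:], cur_val
            else:
                x[i] ^= 1             # revert rejected move
        temp *= alpha                 # geometric cooling
    return best, best_val
```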

    Techniques for improving localization applications running on low-cost IoT devices

    Get PDF
    Nowadays, localization features are widespread in low-cost and low-power IoT applications such as bike sharing, off-road vehicle fleet management, and theft prevention of smart devices. In such use cases, since the item to be tracked is inexpensive, old, or power-constrained (e.g. battery-powered vehicles), localization features are realized by installing low-cost and low-power devices. In this paper, we describe a set of low-computational-power techniques, targeting low-cost IoT devices, to process GPS and INS data and accomplish specific and accurate localization and tracking tasks. The proposed methods address the calibration of a low-cost INS comprising an accelerometer and a gyroscope without the aid of external sensors, the correction of GPS drift when the target position is static, and the minimization of localization error at device boot. The performance of the proposed methods is then evaluated on several datasets acquired in the field and representing real use-case scenarios.
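
    As an illustration of one of the ideas mentioned above (suppressing GPS drift while the device is static), the sketch below infers stationarity from accelerometer variance and averages the GPS fixes collected in that state. Thresholds, window size and names are illustrative assumptions, not the paper's calibration or filtering method.

```python
# Hold/average the GPS fix while the inertial sensors indicate the device
# is stationary, instead of following the drifting raw fixes.
from collections import deque
import statistics

ACC_STD_THRESHOLD = 0.05   # m/s^2, assumed stationarity threshold
WINDOW = 50                # samples used to test for stationarity

acc_window = deque(maxlen=WINDOW)
static_fixes = []          # GPS fixes accumulated while stationary

def update(acc_magnitude, gps_fix):
    """acc_magnitude: |a| from the accelerometer; gps_fix: (lat, lon)."""
    acc_window.append(acc_magnitude)
    if len(acc_window) == WINDOW and statistics.pstdev(acc_window) < ACC_STD_THRESHOLD:
        # Device considered static: average fixes instead of following drift.
        static_fixes.append(gps_fix)
        lat = sum(f[0] for f in static_fixes) / len(static_fixes)
        lon = sum(f[1] for f in static_fixes) / len(static_fixes)
        return (lat, lon)
    # Device moving (or not enough data yet): report the raw fix.
    static_fixes.clear()
    return gps_fix
```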

    Spike encoding techniques for IoT time-varying signals benchmarked on a neuromorphic classification task

    Get PDF
    Spiking Neural Networks (SNNs), known for their potential to enable low energy consumption and computational cost, can bring significant advantages to the realm of embedded machine learning for edge applications. However, input coming from standard digital sensors must be encoded into spike trains before it can be processed with neuromorphic computing technologies. We present here a detailed comparison of available spike encoding techniques for the translation of time-varying signals into the event-based signal domain, tested on two different datasets, both acquired through commercially available digital devices: the Free Spoken Digit dataset (FSD), consisting of 8-kHz audio files, and the WISDM dataset, composed of 20-Hz recordings of human activity from mobile and wearable inertial sensors. We propose a complete pipeline to benchmark these encoding techniques by performing time-dependent signal classification with a Spiking Convolutional Neural Network (sCNN), including a signal preprocessing step consisting of a bank of filters inspired by the human cochlea, feature extraction by production of a sonogram, transfer learning via an equivalent ANN, and model compression schemes aimed at resource optimization. The resulting performance comparison and analysis provide a powerful practical tool, empowering developers to select the most suitable coding method based on the type of data and the desired processing algorithms, and further expand the applicability of neuromorphic computational paradigms to embedded sensor systems widely employed in the IoT and industrial domains.
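
    For readers unfamiliar with spike encoding, the sketch below shows two simple schemes of the family benchmarked above: Bernoulli rate coding and delta (send-on-delta) coding of a sampled signal. Parameters and names are assumptions; the paper evaluates a broader set of techniques on the FSD and WISDM datasets.

```python
# Two illustrative spike encoding schemes for time-varying signals.
import numpy as np

def rate_encode(signal, n_steps=10, rng=None):
    """Bernoulli rate coding: each sample (normalized to [0, 1]) becomes a
    spike train of length n_steps whose firing probability equals its value."""
    rng = rng or np.random.default_rng(0)
    s = (signal - signal.min()) / (np.ptp(signal) + 1e-9)
    return (rng.random((len(s), n_steps)) < s[:, None]).astype(np.uint8)

def delta_encode(signal, threshold=0.1):
    """Delta (send-on-delta) coding: emit an ON/OFF event whenever the signal
    moves more than `threshold` away from the value at the last event."""
    events, ref = [], signal[0]
    for t, x in enumerate(signal):
        if x - ref >= threshold:
            events.append((t, +1)); ref = x
        elif ref - x >= threshold:
            events.append((t, -1)); ref = x
    return events

# Example: encode one second of a 20 Hz inertial-like signal.
sig = np.sin(np.linspace(0, 2 * np.pi, 20))
spikes = rate_encode(sig)          # shape (20, 10) binary array
events = delta_encode(sig)         # list of (timestep, polarity) events
```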

    Human activity recognition: suitability of a neuromorphic approach for on-edge AIoT applications

    Get PDF
    Human activity recognition (HAR) is a classification problem involving time-dependent signals produced by body monitoring, and its application domain covers all aspects of human life, from healthcare to sport, from safety to smart environments. As such, it is naturally well suited for the on-edge deployment of personalized point-of-care (POC) analyses or other services tailored to the user. However, typical smart and wearable devices suffer from severe limitations regarding energy consumption, and this significantly hinders the possibility of successfully employing edge computing for tasks like HAR. In this paper, we investigate how this problem can be mitigated by adopting a neuromorphic approach. By comparing optimized classifiers based on traditional deep neural network (DNN) architectures as well as on recent alternatives like the Legendre Memory Unit (LMU), we show how spiking neural networks (SNNs) can effectively deal with the temporal signals typical of HAR, providing high performance at a low energy cost. By carrying out an application-oriented hyperparameter optimization, we also propose a methodology that can be flexibly extended to different domains, enlarging the range of neuro-inspired classifiers suitable for on-edge artificial intelligence of things (AIoT) applications.

    Smart Traffic Light Control on Edge in IOT-Regulated Intersections

    Get PDF
    Traffic is a well-known everyday problem that standard traffic light controllers can struggle to deal with, especially in highly populated cities, resulting in congestion at intersections and the consequent formation of queues. Smart traffic light management, relying on Internet of Things (IoT) concepts and devices, may be adopted to mitigate this phenomenon. In this paper, we propose a Smart Intersection for Smart Traffic (SIST) regulated model that uses the max-pressure controller algorithm to dynamically modulate the duration of traffic lights, implemented on real-time embedded hardware and using data coming from local sensors and the IoT network. Compared to standard, fixed-duration control schemes, the dynamically IoT-regulated SIST model ensures an overall reduction of queue lengths, improving the prevention of link overload by about 7% compared to the most favorable fixed-duration model.
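
    The max-pressure rule mentioned above admits a compact formulation: at each decision instant, switch to the phase whose allowed movements maximise the total upstream-minus-downstream queue difference. The sketch below illustrates that selection rule; the phase and link definitions are toy assumptions, not the SIST intersection model.

```python
# Max-pressure phase selection from locally sensed queue lengths.
def movement_pressure(queues, movement):
    """Pressure of one movement = upstream queue minus downstream queue."""
    upstream, downstream = movement
    return queues[upstream] - queues.get(downstream, 0)

def max_pressure_phase(queues, phases):
    """Pick the phase whose allowed movements relieve the most pressure.
    phases: {phase_name: [(upstream_link, downstream_link), ...]}"""
    return max(
        phases,
        key=lambda p: sum(movement_pressure(queues, m) for m in phases[p]),
    )

# Example: a toy 4-approach intersection with two phases.
queues = {"N_in": 12, "S_in": 9, "E_in": 3, "W_in": 5, "N_out": 1, "S_out": 0}
phases = {
    "NS_green": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_green": [("E_in", "W_out"), ("W_in", "E_out")],
}
print(max_pressure_phase(queues, phases))   # -> "NS_green" in this toy case
```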

    PageRank Implemented with the MPI Paradigm Running on a Many-Core Neuromorphic Platform

    Get PDF
    SpiNNaker is a neuromorphic hardware platform designed especially for the simulation of Spiking Neural Networks (SNNs). To this end, the platform features massively parallel computation and an efficient communication infrastructure based on the transmission of small packets. The effectiveness of SpiNNaker in the parallel execution of the PageRank (PR) algorithm has been tested through the realization of a custom SNN implementation. In this work, we propose a PageRank implementation fully realized with the MPI programming paradigm, ported to the SpiNNaker platform. We compare the scalability of the proposed program with the equivalent SNN implementation, and we leverage the characteristics of the PageRank algorithm to benchmark our implementation of MPI on SpiNNaker when faced with massive communication requirements. Experimental results show that the algorithm exhibits favorable scaling for a mid-sized execution context, while highlighting that the performance of MPI-PageRank on SpiNNaker is bounded by memory size and speed limitations of the current version of the hardware.
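
    As a reference point for what the ported program computes, the sketch below shows a PageRank power iteration distributed with MPI using mpi4py on a conventional cluster. The partitioning by source node, the dense per-process rank vector and the omission of dangling-node handling are simplifying assumptions, not the SpiNNaker implementation.

```python
# Distributed PageRank power iteration: each MPI rank owns a subset of the
# source nodes and contributes partial scores, combined with Allreduce.
import numpy as np
from mpi4py import MPI

def pagerank_mpi(out_edges, n_nodes, damping=0.85, iters=50):
    """out_edges: {src: [dst, ...]} for the sources owned by this rank.
    Dangling nodes are ignored for brevity."""
    comm = MPI.COMM_WORLD
    rank_local = np.zeros(n_nodes)
    ranks = np.full(n_nodes, 1.0 / n_nodes)

    for _ in range(iters):
        rank_local[:] = 0.0
        # Each MPI rank scatters the score of its own sources to their targets.
        for src, dsts in out_edges.items():
            share = ranks[src] / len(dsts)
            for dst in dsts:
                rank_local[dst] += share
        # Sum the partial contributions coming from every rank.
        contrib = np.zeros(n_nodes)
        comm.Allreduce(rank_local, contrib, op=MPI.SUM)
        ranks = (1.0 - damping) / n_nodes + damping * contrib
    return ranks

if __name__ == "__main__":
    # Toy 4-node graph; sources are split among ranks round-robin.
    edges = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    me = MPI.COMM_WORLD.Get_rank()
    mine = {s: d for s, d in edges.items() if s % MPI.COMM_WORLD.Get_size() == me}
    print(me, pagerank_mpi(mine, 4))
```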

    Configuring an Embedded Neuromorphic coprocessor using a RISC-V chip for enabling edge computing applications

    No full text
    Neuromorphic hardware shows promising potential for employment in edge computing applications, as it can provide real-time and low-power processing of complex data directly at the edge using a computational paradigm based on Spiking Neural Networks (SNNs). However, such systems cannot be deployed as edge devices by themselves, as they require an external host for configuration and data input management. In this paper, we present a chip-level integrated system performing on-edge configuration of a neuromorphic platform. The proposed solution makes use of two existing open-source platforms: the low-power RISC-V processor Rocket Chip and the digital SNN processor ODIN. We built the two systems into a single SoC using the Chipyard framework and connected them by designing a communication interface using ODIN's SPI and AER input/output ports. We validated the system by RTL simulation of a synfire chain running on ODIN, where Rocket Chip sets up the configuration of the network, triggers the first spike, and then collects the simulation results. The synthesized design utilizes a modest amount of resources on a PYNQ-Z2 board: 16% of LUT slices, 11% of Block RAMs and 8 pins, leaving plenty of room to integrate other peripherals or systems. The present work represents a first step towards the seamless integration of neuromorphic technologies with state-of-the-art processors, improving the ease of use of neuromorphic devices and leading the way to widespread use of SNN coprocessors in edge computing applications.
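
    The configuration flow described above (write the network configuration over SPI, inject a first spike over AER, then collect results) can be pictured with the purely hypothetical host-side sketch below. Register offsets, frame formats and the MMIO base address are invented for illustration and do not reflect ODIN's or Rocket Chip's actual interfaces.

```python
# Hypothetical host-side driver for a memory-mapped SPI/AER bridge to an
# SNN coprocessor. Every address and frame layout here is made up.
import mmap
import os
import struct

MMIO_BASE = 0x4000_0000      # hypothetical base of the coprocessor bridge
SPI_TX    = 0x00             # hypothetical SPI transmit register offset
AER_IN    = 0x08             # hypothetical AER input register offset

class CoprocessorBridge:
    def __init__(self, base=MMIO_BASE, span=0x1000):
        fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
        self.mem = mmap.mmap(fd, span, offset=base)

    def _write32(self, offset, value):
        self.mem[offset:offset + 4] = struct.pack("<I", value)

    def spi_write(self, address, data):
        # Hypothetical 32-bit frame: 16-bit register address, 16-bit payload.
        self._write32(SPI_TX, (address << 16) | (data & 0xFFFF))

    def send_aer_event(self, neuron_id):
        # Hypothetical AER event: just the address of the neuron to stimulate.
        self._write32(AER_IN, neuron_id)

# Usage sketch: load synaptic configuration words, then start a synfire chain.
# bridge = CoprocessorBridge()
# for addr, word in enumerate(weight_words):
#     bridge.spi_write(addr, word)
# bridge.send_aer_event(0)     # first spike starts the chain
```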

    Constraint Satisfaction Problems solution through Spiking Neural Networks with improved reliability: the case of Sudoku puzzles

    No full text
    Constraint satisfaction problems (CSPs) are a subset of NP-complete problems, i.e. they belong to the NP (nondeterministic polynomial-time) class; a common example of a CSP is the Latin square problem, of which Sudoku puzzles are a variant. A possible method to find a solution is a neuromorphic approach, as shown in previous work. Specifically, a stochastic Spiking Neural Network (SNN) made of Leaky Integrate-and-Fire (LIF) neurons, with a structure defined over the mathematical formulation of the CSP, can be employed. Such a network describes a system that stochastically evolves towards the solution of the Sudoku puzzle in the configuration space, following attractor dynamics. Despite showing the capability of solving this and other CSPs, this approach presents some limitations that should be addressed: (i) each solution attempt requires a pre-set simulation time that cannot be modified once started, which prevents stopping the process as soon as the solution has been found and introduces unnecessary energy consumption; (ii) validation is carried out through a binning process, which consists in extracting the spikes from the neuromorphic platform and analyzing the network state on an external platform, inherently implying additional energy consumption due to data preparation and transfer; (iii) the original problem is encoded into the SNN through a mapping process, and, given the complexity of some problem classes, there may be limitations on the reliability with which the network models the initial addresses, which can be observed to change during the simulation. Here we propose a fully spiking pipeline able to find a solution, validate it, and stop the generation of spikes.
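
    To make the mapping from the CSP formulation to a network structure concrete, the sketch below enumerates the mutual-inhibition links of a Sudoku constraint graph, with one population per (row, column, digit) hypothesis. The indexing scheme and the plain connection-list output are illustrative assumptions, not the cited work's exact construction.

```python
# Inhibitory constraint connectivity for a Sudoku-solving SNN: populations
# whose hypotheses violate a Sudoku rule mutually inhibit each other.
from itertools import product

N = 9  # standard 9x9 Sudoku

def pop_id(row, col, digit):
    """Index of the population representing 'cell (row, col) holds digit'."""
    return (row * N + col) * N + digit

def conflicting(a, b):
    (r1, c1, d1), (r2, c2, d2) = a, b
    same_cell = (r1, c1) == (r2, c2) and d1 != d2            # one digit per cell
    same_row = r1 == r2 and d1 == d2 and c1 != c2            # digit once per row
    same_col = c1 == c2 and d1 == d2 and r1 != r2            # digit once per column
    same_box = ((r1 // 3, c1 // 3) == (r2 // 3, c2 // 3)
                and d1 == d2 and (r1, c1) != (r2, c2))        # digit once per box
    return same_cell or same_row or same_col or same_box

def inhibitory_connections():
    """Return the list of (pre, post) inhibitory links encoding the constraints."""
    hyps = list(product(range(N), range(N), range(N)))
    return [(pop_id(*a), pop_id(*b))
            for a in hyps for b in hyps
            if a != b and conflicting(a, b)]

links = inhibitory_connections()
print(len(links))   # number of inhibitory links in the constraint graph
```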

    Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware

    Get PDF
    Spatio-temporal pattern recognition is a fundamental ability of the brain, required for numerous real-world applications. Recent deep learning approaches have reached outstanding accuracy in such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive. Tactile sensing in robotic applications is a representative example where real-time processing and energy efficiency are required. Following a brain-inspired computing approach, we propose a new benchmark for spatio-temporal tactile pattern recognition at the edge through braille letter reading. We recorded a new braille letters dataset based on the capacitive tactile sensors of the iCub robot's fingertip, then we investigated the importance of temporal information and the impact of event-based encoding for spike-based computation. Afterwards, we trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for fast and efficient inference. We compared our approach with standard classifiers, in particular with a Long Short-Term Memory (LSTM) deployed on the embedded Nvidia Jetson GPU, in terms of classification accuracy, power/energy consumption and computational delay. Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy-efficient than the LSTM on Jetson, requiring an average power of only 31 mW. This work proposes a new benchmark for tactile sensing and highlights the challenges and opportunities of event-based encoding, neuromorphic hardware and spike-based computing for spatio-temporal pattern recognition at the edge.
    Comment: 20 pages, submitted to Frontiers in Neuroscience - Neuromorphic Engineering
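
    Training SNNs offline with back-propagation through time relies on a surrogate gradient for the non-differentiable spike. The sketch below shows a standard fast-sigmoid surrogate and a single discrete LIF update in PyTorch; the steepness, leak and threshold values are assumptions, not the hyperparameters used in the paper.

```python
# Surrogate-gradient spike nonlinearity and one LIF step, as commonly used
# for offline SNN training with back-propagation through time.
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward."""
    scale = 10.0  # assumed surrogate steepness

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # d(spike)/d(u) approximated by 1 / (scale * |u| + 1)^2
        surrogate = 1.0 / (SurrogateSpike.scale * u.abs() + 1.0) ** 2
        return grad_output * surrogate

def lif_step(u, x, spike_prev, beta=0.9, threshold=1.0):
    """One discrete LIF update: leak, integrate input, spike, soft reset."""
    u = beta * u + x - spike_prev * threshold
    spike = SurrogateSpike.apply(u - threshold)
    return u, spike
```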